Results 1 - 20 of 50,362
1.
Sci Data ; 11(1): 366, 2024 Apr 11.
Article in English | MEDLINE | ID: mdl-38605079

ABSTRACT

Studies of radiomics features (RFs) have shown limited reproducibility of RFs across different acquisition settings. To date, reproducibility studies using CT images have relied mainly on phantoms, because of the harm of exposing patients to additional X-rays. The CadAIver dataset provided here aims to evaluate how CT scanner parameters affect radiomics features in a cadaveric donor. The dataset comprises 112 unique CT acquisitions of a cadaveric trunk acquired on 3 different CT scanners while varying kV, mA, field-of-view, and reconstruction kernel settings. Technical validation of the CadAIver dataset comprises a comprehensive univariate and multivariate GLM approach to assess the stability of each RF extracted from the lumbar vertebrae. The complete dataset is publicly available for future research in the RF field, and could foster the creation of a collaborative open CT image database that increases the sample size, the range of available scanners, and the available body districts.
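
As an illustration of the kind of stability screening such a dataset enables (not the authors' actual GLM pipeline), a minimal coefficient-of-variation check across repeated acquisitions might look like the sketch below; the feature table, column names, and the 10% threshold are assumptions.

    import numpy as np
    import pandas as pd

    # Hypothetical table: one row per CT acquisition of the same vertebra,
    # one column per radiomics feature extracted from that acquisition.
    rng = np.random.default_rng(0)
    features = pd.DataFrame(
        rng.normal(loc=[100.0, 5.0, 0.8], scale=[2.0, 1.5, 0.01], size=(112, 3)),
        columns=["glcm_contrast", "firstorder_entropy", "shape_sphericity"],
    )

    # Coefficient of variation (COV, %) per feature across acquisitions;
    # a common screening rule keeps features with COV below ~10%.
    cov = features.std(ddof=1) / features.mean().abs() * 100.0
    stable = cov[cov < 10.0].index.tolist()
    print(cov.round(2))
    print("Features stable across settings:", stable)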


Subjects
Lumbar Vertebrae, X-Ray Computed Tomography, Humans, Cadaver, Computer-Assisted Image Processing/methods, Lumbar Vertebrae/diagnostic imaging, 60570, Reproducibility of Results, X-Ray Computed Tomography/methods
2.
Sci Rep ; 14(1): 8738, 2024 04 16.
Article in English | MEDLINE | ID: mdl-38627421

ABSTRACT

Glioblastoma is a brain tumor that arises from abnormal cells in the brain and, in children, is detected with MRI (magnetic resonance imaging), which uses a powerful magnetic field, radio waves, and a computer to produce detailed images of the body's internal structures; MRI is a standard diagnostic tool for a wide range of medical conditions, from detecting brain and spinal cord injuries to identifying tumors and evaluating joint problems. Glioblastoma is treatable, but if it is left untreated the child may die; to avoid this, the brain problem must be diagnosed and treated on the basis of MRI scans. This research uses neural networks to support that diagnosis, combining the techniques of max and min rationalizing of images with a boosted division time attribute extraction method. The max and min rationalization process is used to recognize glioblastoma in the brain images and improve treatment efficiency, and an image segment is created for image recognition. The boosted division time attribute extraction method is then used for image recognition and feature extraction from the MRI data. The proposed method helps to recognize the images and detect glioblastoma with feasible accuracy using image rationalization. In addition, 45% of adults and 40% of children are reported to be affected by the tumor, and 5% of cases result in death; to reduce this ratio, in this study glioblastoma is identified and segmented, and the tumor grades are analyzed from the MRI images. The accuracy of the proposed TAE-PIS system is 98.12%, which is higher than that of the compared methods: a genetic algorithm (GA), a convolutional neural network (CNN), a fuzzy-based minimum and maximum neural network (fuzzy min-max NN), and a kernel-based support vector machine. Experimental results show that the proposed method achieves 98.12% accuracy with low response time, corresponding to improvements of 80.82%, 82.13%, 85.61%, and 87.03% over GA, CNN, fuzzy min-max NN, and the kernel-based support vector machine, respectively.


Subjects
Brain Neoplasms, Glioblastoma, Adult, Child, Humans, Glioblastoma/diagnostic imaging, Computer-Assisted Image Processing/methods, Brain Neoplasms/pathology, Brain/diagnostic imaging, Brain/pathology, Algorithms
3.
Sci Rep ; 14(1): 8504, 2024 04 12.
Article in English | MEDLINE | ID: mdl-38605094

ABSTRACT

This work aims to investigate the clinical feasibility of deep learning-based synthetic CT images for cervical cancer, comparing them to MR for calculating attenuation (MRCAT). A patient cohort of 50 pairs of T2-weighted MR and CT images from cervical cancer patients was split into 40 pairs for training and 10 for testing. As a preprocessing step, we performed deformable image registration and Nyul intensity normalization on the MR images to maximize the similarity between the MR and CT images. The processed images were fed into a deep learning model, a generative adversarial network. To demonstrate clinical feasibility, we assessed the accuracy of the synthetic CT images in terms of image similarity, using the structural similarity index (SSIM) and mean absolute error (MAE), and dosimetric similarity, using the gamma passing rate (GPR). Dose calculation was performed on the true and synthetic CT images with a commercial Monte Carlo algorithm. The synthetic CT images generated by deep learning outperformed the MRCAT images in image similarity by 1.5% in SSIM and 18.5 HU in MAE. In dosimetry, the DL-based synthetic CT images achieved GPRs of 98.71% and 96.39% at the 1%/1 mm criterion with 10% and 60% cut-off values of the prescription dose, which were 0.9% and 5.1% higher than those of the MRCAT images.
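
For context, the two image-similarity scores named in the abstract (SSIM and MAE) can be computed between a reference CT and a synthetic CT roughly as in this sketch; the array contents, the slice-wise averaging, and the HU range used for data_range are assumptions, not the study's exact evaluation code.

    import numpy as np
    from skimage.metrics import structural_similarity

    # Placeholder volumes in Hounsfield units: reference CT and synthetic CT.
    rng = np.random.default_rng(1)
    ct_true = rng.uniform(-1000, 1500, size=(64, 128, 128))
    ct_synth = ct_true + rng.normal(0, 20, size=ct_true.shape)

    # Mean absolute error in HU over the whole volume (often restricted to a body mask).
    mae = np.mean(np.abs(ct_true - ct_synth))

    # SSIM computed slice-wise and averaged; data_range spans the assumed HU window.
    ssim_slices = [
        structural_similarity(ct_true[k], ct_synth[k], data_range=2500.0)
        for k in range(ct_true.shape[0])
    ]
    print(f"MAE = {mae:.1f} HU, mean SSIM = {np.mean(ssim_slices):.3f}")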


Subjects
Deep Learning, Uterine Cervical Neoplasms, Female, Humans, Uterine Cervical Neoplasms/diagnostic imaging, Feasibility Studies, Computer-Assisted Image Processing/methods, Magnetic Resonance Imaging/methods, X-Ray Computed Tomography/methods, Computer-Assisted Radiotherapy Planning/methods
4.
Comput Biol Med ; 173: 108377, 2024 May.
Article in English | MEDLINE | ID: mdl-38569233

ABSTRACT

Observing cortical vascular structure and function at high resolution using laser speckle contrast imaging (LSCI) plays a crucial role in understanding cerebral pathologies. Open-skull window techniques are usually applied to reduce scattering by the skull and enhance image quality. However, craniotomy surgery inevitably induces inflammation, which may obstruct observation in certain scenarios. Image enhancement algorithms, in contrast, provide popular tools for improving the signal-to-noise ratio (SNR) of LSCI, but current methods have been less than satisfactory through the intact skull because the transcranial cortical images are of poor quality. Moreover, existing algorithms do not guarantee the accuracy of dynamic blood flow mappings. In this study, we develop an unsupervised deep learning method, named Dual-Channel in Spatial-Frequency Domain CycleGAN (SF-CycleGAN), to enhance the perceptual quality of cortical blood flow imaging by LSCI. SF-CycleGAN enables convenient, non-invasive, and effective observation of cortical vascular structure and accurate dynamic blood flow mapping without craniotomy, visualizing biodynamics in an undisturbed biological environment. Our experimental results showed that SF-CycleGAN achieved an SNR at least 4.13 dB higher than that of other unsupervised methods, imaged the complete vascular morphology, and enabled the functional observation of small cortical vessels. Additionally, the proposed method showed remarkable robustness and could be generalized to various imaging configurations and image modalities, including fluorescence images, without retraining.
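
As background on LSCI itself rather than on SF-CycleGAN, the spatial speckle contrast K = local std / local mean over a small sliding window, from which flow maps are derived, can be computed as in this rough sketch; the window size and the synthetic test frame are assumptions.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def speckle_contrast(raw, window=7):
        """Spatial speckle contrast K = local std / local mean over a sliding window."""
        raw = raw.astype(np.float64)
        mean = uniform_filter(raw, size=window)
        mean_sq = uniform_filter(raw * raw, size=window)
        var = np.clip(mean_sq - mean * mean, 0.0, None)
        return np.sqrt(var) / (mean + 1e-12)

    # Synthetic raw speckle frame standing in for a transcranial LSCI acquisition.
    rng = np.random.default_rng(2)
    frame = rng.exponential(scale=100.0, size=(256, 256))
    K = speckle_contrast(frame)
    print("median speckle contrast:", round(float(np.median(K)), 3))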


Subjects
Hemodynamics, Image Enhancement, Image Enhancement/methods, Skull/diagnostic imaging, Regional Blood Flow/physiology, Head, Computer-Assisted Image Processing/methods
5.
Comput Biol Med ; 173: 108390, 2024 May.
Article in English | MEDLINE | ID: mdl-38569234

ABSTRACT

Radiotherapy is one of the primary treatment methods for tumors, but organ movement caused by respiration limits its accuracy. Recently, 3D imaging from a single X-ray projection has received extensive attention as a promising approach to address this issue. However, current methods can only reconstruct 3D images without directly locating the tumor, and they have been validated only for fixed-angle imaging, which fails to fully meet the requirements of motion control in radiotherapy. In this study, a novel imaging method, RT-SRTS, is proposed that integrates 3D imaging and tumor segmentation into one network based on multi-task learning (MTL) and achieves real-time simultaneous 3D reconstruction and tumor segmentation from a single X-ray projection at any angle. Furthermore, attention enhanced calibrator (AEC) and uncertain-region elaboration (URE) modules are proposed to aid feature extraction and improve segmentation accuracy. The proposed method was evaluated on fifteen patient cases and compared with three state-of-the-art methods. It not only delivers superior 3D reconstruction but also demonstrates commendable tumor segmentation results. Simultaneous reconstruction and segmentation can be completed in approximately 70 ms, significantly faster than the time threshold required for real-time tumor tracking. The efficacy of both AEC and URE has also been validated in ablation studies. The code for this work is available at https://github.com/ZywooSimple/RT-SRTS.
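
The multi-task idea of jointly optimizing a reconstruction head and a segmentation head from one backbone can be sketched as a weighted combined loss, as below; the tensor shapes, loss weights, and soft-Dice formulation are assumptions for illustration, not the published RT-SRTS design.

    import torch
    import torch.nn as nn

    def dice_loss(pred, target, eps=1e-6):
        """Soft Dice loss on probabilistic segmentation maps."""
        inter = (pred * target).sum()
        return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

    # Dummy outputs standing in for two heads sharing one backbone:
    # a 3D reconstruction and a tumor segmentation, plus matching ground truth.
    recon_pred = torch.rand(1, 1, 32, 64, 64, requires_grad=True)
    seg_pred = torch.rand(1, 1, 32, 64, 64, requires_grad=True)
    recon_gt = torch.rand(1, 1, 32, 64, 64)
    seg_gt = (torch.rand(1, 1, 32, 64, 64) > 0.9).float()

    # Multi-task objective: weighted sum of reconstruction and segmentation terms.
    w_recon, w_seg = 1.0, 0.5          # assumed weights
    loss = w_recon * nn.functional.mse_loss(recon_pred, recon_gt) \
         + w_seg * dice_loss(torch.sigmoid(seg_pred), seg_gt)
    loss.backward()
    print(float(loss))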


Subjects
Three-Dimensional Imaging, Neoplasms, Humans, Three-Dimensional Imaging/methods, X-Rays, Radiography, Neoplasms/diagnostic imaging, Respiration, Computer-Assisted Image Processing/methods
6.
Opt Express ; 32(7): 11934-11951, 2024 Mar 25.
Article in English | MEDLINE | ID: mdl-38571030

ABSTRACT

Optical coherence tomography (OCT) can resolve three-dimensional biological tissue structures, but it is inevitably plagued by speckle noise that degrades image quality and obscures biological structure. Unsupervised deep learning methods have recently become popular for OCT despeckling, but they still require either unpaired noisy-clean images or paired noisy-noisy images. To address this problem, we propose what we believe to be a novel unsupervised deep learning method for OCT despeckling, termed Double-free Net, which eliminates the need for ground-truth data and repeated scanning by sub-sampling noisy images and synthesizing noisier images. Compared with existing unsupervised methods, Double-free Net obtains superior denoising performance when trained on datasets comprising retinal and human tissue images without clean images. The efficacy of Double-free Net in denoising holds significant promise for diagnostic applications in retinal pathologies and enhances the accuracy of retinal layer segmentation. Results demonstrate that Double-free Net outperforms state-of-the-art methods and exhibits strong convenience and adaptability across different OCT images.
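
One possible reading of the "synthesizing noisier images" idea, in the spirit of Noisier2Noise-style training rather than the exact Double-free Net recipe, is to corrupt an already-noisy B-scan with additional multiplicative speckle; the sketch below is an assumption made only to make that idea concrete.

    import numpy as np

    rng = np.random.default_rng(3)

    def make_noisier(noisy_bscan, extra_speckle=0.3):
        """Create a training input by adding extra multiplicative speckle to a noisy OCT B-scan."""
        speckle = 1.0 + extra_speckle * rng.standard_normal(noisy_bscan.shape)
        return noisy_bscan * np.clip(speckle, 0.0, None)

    # Placeholder noisy B-scan; a network trained to map noisier -> noisy can,
    # at inference, push noisy inputs toward an estimate of the clean signal.
    noisy = rng.gamma(shape=4.0, scale=25.0, size=(256, 512))
    noisier = make_noisier(noisy)
    print(round(float(noisy.std()), 1), round(float(noisier.std()), 1))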


Subjects
Algorithms, Optical Coherence Tomography, Humans, Optical Coherence Tomography/methods, Retina/diagnostic imaging, Radionuclide Imaging, Computer-Assisted Image Processing/methods
7.
Sci Rep ; 14(1): 8253, 2024 04 08.
Article in English | MEDLINE | ID: mdl-38589478

ABSTRACT

This work presents a deep learning approach for rapid and accurate muscle water T2 mapping with subject-specific fat T2 calibration using multi-spin-echo acquisitions. The method addresses the computational limitations of conventional bi-component Extended Phase Graph fitting methods (nonlinear least squares and dictionary-based) by leveraging fully connected neural networks for fast processing with minimal computational resources. We validated the approach through in vivo experiments using two different MRI vendors. The results showed strong agreement between our deep learning approach and the reference methods, summarized by Lin's concordance correlation coefficients ranging from 0.89 to 0.97. Further, the deep learning method achieved a significant improvement in computational time, processing data 116 and 33 times faster than the nonlinear least squares and dictionary methods, respectively. In conclusion, the proposed approach demonstrated significant time and resource efficiency improvements over conventional methods while maintaining similar accuracy. This methodology makes the processing of water T2 data faster and easier for the user and will facilitate the use of quantitative muscle water T2 maps in clinical and research studies.
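
The agreement statistic cited here, Lin's concordance correlation coefficient, has a simple closed form; a small self-contained sketch (with made-up T2 values, not data from the study) is given below.

    import numpy as np

    def lins_ccc(x, y):
        """Lin's concordance correlation coefficient between two measurement series."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        mx, my = x.mean(), y.mean()
        vx, vy = x.var(), y.var()              # population variances
        cov = ((x - mx) * (y - my)).mean()
        return 2.0 * cov / (vx + vy + (mx - my) ** 2)

    # Made-up water T2 values (ms): reference EPG fit vs. neural-network prediction.
    t2_ref = np.array([31.2, 33.5, 29.8, 35.1, 32.0, 30.4])
    t2_net = np.array([31.0, 33.9, 30.1, 34.6, 32.3, 30.0])
    print(round(lins_ccc(t2_ref, t2_net), 3))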


Subjects
Algorithms, Deep Learning, Water, Calibration, Magnetic Resonance Imaging/methods, Muscles/diagnostic imaging, Imaging Phantoms, Computer-Assisted Image Processing/methods, Brain
8.
BMC Med Imaging ; 24(1): 83, 2024 Apr 08.
Article in English | MEDLINE | ID: mdl-38589793

ABSTRACT

The research focuses on the segmentation and classification of leukocytes, a crucial task in medical image analysis for diagnosing various diseases. The leukocyte dataset comprises four classes of images: monocytes, lymphocytes, eosinophils, and neutrophils. Leukocyte segmentation is achieved through image processing techniques, including background subtraction, noise removal, and contouring. To isolate the leukocytes, background, erythrocyte, and leukocyte masks are created from the blood cell images. The isolated leukocytes are then subjected to data augmentation, including brightness and contrast adjustment, flipping, and random shearing, to improve the generalizability of the CNN model. A deep convolutional neural network (CNN) model is applied to the augmented dataset for effective feature extraction and classification. The deep CNN model consists of four convolutional blocks with eleven convolutional layers, eight batch normalization layers, eight rectified linear unit (ReLU) layers, and four dropout layers to capture increasingly complex patterns. For this research, a publicly available dataset from Kaggle consisting of a total of 12,444 images of the four types of leukocytes was used to conduct the experiments. The results showcase the robustness of the proposed framework, achieving impressive performance metrics with an accuracy of 97.98% and a precision of 97.97%. These outcomes affirm the efficacy of the devised segmentation and classification approach in accurately identifying and categorizing leukocytes. The combination of an advanced CNN architecture and meticulous pre-processing steps establishes a foundation for future developments in the field of medical image analysis.
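
The augmentation step described (brightness/contrast adjustment, flipping, random shearing) could be expressed with torchvision transforms roughly as follows; the exact jitter ranges, shear angle, and dummy input are assumptions, not the study's settings.

    from PIL import Image
    import numpy as np
    from torchvision import transforms

    # Assumed augmentation pipeline for isolated leukocyte crops.
    augment = transforms.Compose([
        transforms.ColorJitter(brightness=0.2, contrast=0.2),   # brightness/contrast adjustment
        transforms.RandomHorizontalFlip(p=0.5),                 # flipping
        transforms.RandomAffine(degrees=0, shear=10),           # random shearing
        transforms.ToTensor(),
    ])

    # Dummy RGB crop standing in for an isolated leukocyte image.
    crop = Image.fromarray(np.uint8(np.random.rand(128, 128, 3) * 255))
    augmented = augment(crop)
    print(augmented.shape)   # torch.Size([3, 128, 128])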


Subjects
Deep Learning, Humans, Data Curation, Leukocytes, Computer Neural Networks, Blood Cells, Computer-Assisted Image Processing/methods
9.
PLoS One ; 19(4): e0299099, 2024.
Article in English | MEDLINE | ID: mdl-38564618

ABSTRACT

Individual muscle segmentation is the process of partitioning medical images into regions representing each muscle. It can be used to isolate spatially structured quantitative muscle characteristics, such as volume, geometry, and the level of fat infiltration. These features are pivotal to measuring the state of muscle functional health and to tracking the response of the body to musculoskeletal and neuromusculoskeletal disorders. The gold standard approach to muscle segmentation requires manual processing of large numbers of images and is associated with significant operator repeatability issues and high time requirements. Deep learning-based techniques have recently been suggested to be capable of automating the process, which would catalyse research into the effects of musculoskeletal disorders on the muscular system. In this study, three convolutional neural networks were explored in their capacity to automatically segment twenty-three lower limb muscles from the hips, thighs, and calves in magnetic resonance images. The three neural networks (UNet, Attention UNet, and a novel Spatial Channel UNet) were trained independently with augmented images to segment 6 subjects and were able to segment the muscles with an average relative volume error (RVE) between -8.6% and 2.9%, an average Dice similarity coefficient (DSC) between 0.70 and 0.84, and an average Hausdorff distance (HD) between 12.2 and 46.5 mm, with performance dependent on both the subject and the network used. The trained convolutional neural networks and the data used in this study are openly available, either for re-training on other medical images or for application to automatically segment new T1-weighted lower limb magnetic resonance images captured with similar acquisition parameters.
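
The three reported metrics (RVE, DSC, HD) are standard and can be computed from binary masks as in this sketch; the toy masks and the 1 mm isotropic voxel spacing are assumptions.

    import numpy as np
    from scipy.spatial.distance import directed_hausdorff

    def dice(a, b):
        return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

    def relative_volume_error(pred, ref):
        return (pred.sum() - ref.sum()) / ref.sum() * 100.0

    def hausdorff(pred, ref, spacing=(1.0, 1.0, 1.0)):
        p = np.argwhere(pred) * np.asarray(spacing)
        r = np.argwhere(ref) * np.asarray(spacing)
        return max(directed_hausdorff(p, r)[0], directed_hausdorff(r, p)[0])

    # Toy binary masks standing in for a reference and predicted muscle segmentation.
    ref = np.zeros((32, 64, 64), bool); ref[10:20, 20:40, 20:40] = True
    pred = np.zeros_like(ref);          pred[11:21, 21:41, 19:39] = True
    print(f"DSC={dice(pred, ref):.3f}  RVE={relative_volume_error(pred, ref):+.1f}%  "
          f"HD={hausdorff(pred, ref):.1f} mm")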


Subjects
Deep Learning, Humans, Female, Animals, Cattle, Computer-Assisted Image Processing/methods, Postmenopause, Thigh/diagnostic imaging, Muscles, Magnetic Resonance Imaging/methods
10.
Biomed Eng Online ; 23(1): 39, 2024 Apr 02.
Article in English | MEDLINE | ID: mdl-38566181

ABSTRACT

BACKGROUND: Congenital heart disease (CHD) is one of the most common birth defects in the world. It is a leading cause of infant mortality, necessitating an early diagnosis for timely intervention. Prenatal screening using ultrasound is the primary method for CHD detection. However, its effectiveness is heavily reliant on the expertise of physicians, leading to subjective interpretations and potential underdiagnosis. Therefore, a method for automatic analysis of fetal cardiac ultrasound images is highly desired to support an objective and effective CHD diagnosis. METHOD: In this study, we propose a deep learning-based framework for the identification and segmentation of the three vessels (the pulmonary artery, aorta, and superior vena cava) in the ultrasound three vessel view (3VV) of the fetal heart. In the first stage of the framework, the object detection model YOLOv5 is employed to identify the three vessels and localize the region of interest (ROI) within the original full-sized ultrasound images. Subsequently, a modified Deeplabv3 equipped with our novel AMFF (Attentional Multi-scale Feature Fusion) module is applied in the second stage to segment the three vessels within the cropped ROI images. RESULTS: We evaluated our method with a dataset consisting of 511 fetal heart 3VV images. Compared to existing models, our framework exhibits superior performance in the segmentation of all three vessels, achieving Dice coefficients of 85.55%, 89.12%, and 77.54% for the PA, Ao, and SVC, respectively. CONCLUSIONS: Our experimental results show that the proposed framework can automatically and accurately detect and segment the three vessels in fetal heart 3VV images. This method has the potential to assist sonographers in enhancing the precision of vessel assessment during fetal heart examinations.


Subjects
Deep Learning, Pregnancy, Female, Humans, Superior Vena Cava, Ultrasonography, Prenatal Ultrasonography/methods, Fetal Heart/diagnostic imaging, Computer-Assisted Image Processing/methods
11.
PLoS One ; 19(4): e0298287, 2024.
Article in English | MEDLINE | ID: mdl-38593135

ABSTRACT

Cryo-electron micrographs have varied characteristics, such as differing sizes, shapes, and distribution densities of individual particles, severe background noise, high levels of impurities, irregular particle shapes, blurred edges, and particle colors similar to the background. Picking single particles from multiple types of cryo-electron micrographs with good adaptability is currently a challenge in the field. This paper draws on the MixUp hybrid augmentation algorithm to enhance image feature information in the pre-processing stage; builds a feature perception network based on a channel self-attention mechanism in the forward network of the Swin Transformer model, achieving adaptive adjustment of self-attention between different single particles and increasing the network's tolerance to noise; incorporates the PReLU activation function to enhance information exchange between pixel blocks of different single particles; and combines the cross-entropy function with the softmax function to construct a Swin Transformer-based classification network suited to single-particle detection in cryo-electron micrographs (Swin-cryoEM), achieving mixed detection of multiple types of single particles. The Swin-cryoEM algorithm better addresses the problem of adaptable single-particle picking across many types of cryo-electron micrographs, improves the accuracy and generalization ability of single-particle picking, and provides high-quality data support for the three-dimensional reconstruction of single particles. In this paper, ablation and comparison experiments were designed to evaluate Swin-cryoEM in detail and comprehensively on multiple datasets. Average Precision is an important evaluation index of the model; the optimal Average Precision of Swin-cryoEM reached 95.5% in the training stage, and the single-particle picking performance was also superior in the prediction stage. The model inherits the advantages of the Swin Transformer detection model and is superior to mainstream models such as Faster R-CNN and YOLOv5 in terms of single-particle detection capability on cryo-electron micrographs.
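
The MixUp augmentation the pipeline draws on has a simple closed form, a convex combination of two samples and their labels with a Beta-distributed weight; the generic sketch below (with assumed alpha and toy patches) illustrates that idea rather than the paper's exact pre-processing.

    import numpy as np

    def mixup(x1, y1, x2, y2, alpha=0.2, rng=np.random.default_rng(0)):
        """MixUp: convex combination of two images and their one-hot labels."""
        lam = rng.beta(alpha, alpha)
        return lam * x1 + (1.0 - lam) * x2, lam * y1 + (1.0 - lam) * y2

    # Two dummy micrograph patches with one-hot labels (particle vs. background).
    a = np.random.rand(64, 64); b = np.random.rand(64, 64)
    ya = np.array([1.0, 0.0]);  yb = np.array([0.0, 1.0])
    x_mix, y_mix = mixup(a, ya, b, yb)
    print(y_mix)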


Subjects
Algorithms, Electrons, Cryoelectron Microscopy/methods, Computer-Assisted Image Processing/methods
12.
J Biomed Opt ; 29(Suppl 2): S22706, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38638450

ABSTRACT

Significance: Three-dimensional quantitative phase imaging (QPI) has rapidly emerged as a complementary tool to fluorescence imaging, as it provides an objective measure of cell morphology and dynamics, free of variability due to contrast agents. It has opened up new directions of investigation by providing systematic and correlative analysis of various cellular parameters without the limitations of photobleaching and phototoxicity. While current QPI systems allow the rapid acquisition of tomographic images, the pipeline to analyze these raw three-dimensional (3D) tomograms is not well developed. We focus on a critical, yet often underappreciated, step of the analysis pipeline: 3D cell segmentation from the acquired tomograms. Aim: We report the CellSNAP (Cell Segmentation via Novel Algorithm for Phase Imaging) algorithm for the 3D segmentation of QPI images. Approach: The cell segmentation algorithm mimics the gemstone extraction process, initiating with a coarse 3D extrusion from a two-dimensional (2D) segmented mask to outline the cell structure. A 2D image is generated, and a segmentation algorithm identifies the boundary in the x-y plane. Leveraging cell continuity in consecutive z-stacks, a refined 3D segmentation, akin to fine chiseling in gemstone carving, completes the process. Results: The CellSNAP algorithm outstrips the current gold standard in terms of speed, robustness, and ease of implementation, achieving cell segmentation in under 2 s per cell on a single-core processor. The implementation of CellSNAP can easily be parallelized on a multi-core system for further speed improvements. For the cases where segmentation is possible with the existing standard method, our algorithm displays an average difference of 5% for dry mass and 8% for volume measurements. We also show that CellSNAP can handle challenging image datasets where cells are clumped and marred by interferogram drifts, which pose major difficulties for all QPI-focused AI-based segmentation tools. Conclusion: Our proposed method is less memory intensive and significantly faster than existing methods. The method can be easily implemented on a student laptop. Since the approach is rule-based, there is no need to collect large amounts of imaging data and manually annotate them to perform machine learning-based training of a model. We envision our work will lead to broader adoption of QPI imaging for high-throughput analysis, which has, in part, been stymied by a lack of suitable image segmentation tools.


Subjects
Computer-Assisted Image Processing, Three-Dimensional Imaging, Humans, Computer-Assisted Image Processing/methods, Three-Dimensional Imaging/methods, 60704, Algorithms, Optical Imaging
13.
Physiol Meas ; 45(4)2024 Apr 18.
Article in English | MEDLINE | ID: mdl-38565126

ABSTRACT

Objective. The objective of this study was to propose a novel data-driven method for solving ill-posed inverse problems, particularly in certain conditions such as time-difference electrical impedance tomography for detecting the location and size of bubbles inside a pipe. Approach. We introduced a new layer architecture composed of three paths: spatial, spectral, and truncated spectral paths. The spatial path processes information locally, whereas the spectral and truncated spectral paths provide the network with a global receptive field. This unique architecture helps eliminate the ill-posedness and nonlinearity inherent in the inverse problem. The three paths were designed to be interconnected, allowing for an exchange of information on different receptive fields with varied learning abilities. Our network has a bottleneck architecture that enables it to recover signal information from noisy redundant measurements. We named our proposed model the truncated spatial-spectral convolutional neural network (TSS-ConvNet). Main results. Our model demonstrated superior accuracy with relatively high resolution on both simulation and experimental data. This indicates that our approach offers significant potential for addressing ill-posed inverse problems in complex conditions effectively and accurately. Significance. The TSS-ConvNet overcomes the receptive field limitation found in most existing models that only utilize local information in Euclidean space. We trained the network on a large dataset covering various configurations with random parameters to ensure generalization over the training samples.


Subjects
X-Ray Computed Tomography, Tomography, Tomography/methods, Electric Impedance, Computer Neural Networks, Computer-Assisted Image Processing/methods
14.
Biomed Phys Eng Express ; 10(3)2024 Apr 18.
Article in English | MEDLINE | ID: mdl-38579691

ABSTRACT

Background. Modern radiation therapy technologies aim to enhance radiation dose precision to the tumor and utilize hypofractionated treatment regimens. Verifying the dose distributions associated with these advanced radiation therapy treatments remains an active research area due to the complexity of delivery systems and the lack of suitable three-dimensional dosimetry tools. Gel dosimeters are a potential tool for measuring these complex dose distributions. A prototype tabletop solid-tank fan-beam optical CT scanner for readout of gel dosimeters was recently developed. This scanner does not have a straight ray path from source to detector, so images cannot be reconstructed using filtered backprojection (FBP) and iterative techniques are required. Purpose. To compare a subset of the top-performing algorithms in terms of image quality and quantitatively determine the optimal algorithm while accounting for refraction within the optical CT system. The following algorithms were compared: Landweber, superiorized Landweber with the fast gradient projection perturbation routine (S-LAND-FGP), the fast iterative shrinkage/thresholding algorithm with a total variation penalty term (FISTA-TV), a monotone version of FISTA-TV (MFISTA-TV), superiorized conjugate gradient with the nonascending perturbation routine (S-CG-NA), superiorized conjugate gradient with the fast gradient projection perturbation routine (S-CG-FGP), and superiorized conjugate gradient with two iterations of CG performed on the current iterate and the nonascending perturbation routine (S-CG-2-NA). Methods. A ray tracing simulator was developed to track the path of light rays as they traverse the different media of the optical CT scanner. Two clinical phantoms and several synthetic phantoms were produced and used to evaluate the reconstruction techniques under known conditions. Reconstructed images were analyzed in terms of spatial resolution, signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), signal non-uniformity (SNU), mean relative difference (MRD), and reconstruction time. We developed an image-quality-based method to find the optimal stopping-iteration window for each algorithm. Imaging data from the prototype optical CT scanner were reconstructed and analyzed to determine the optimal algorithm for this application. Results. The optimal algorithms found through the quantitative scoring metric were FISTA-TV and S-CG-2-NA. MFISTA-TV behaved almost identically to FISTA-TV; however, MFISTA-TV was unable to resolve some of the synthetic phantoms. S-CG-NA showed extreme fluctuations in the SNR and CNR values. S-CG-FGP had large fluctuations in the SNR and CNR values, less noise reduction than FISTA-TV, and worse spatial resolution than S-CG-2-NA. S-LAND-FGP had many of the same characteristics as FISTA-TV: high noise reduction and stability against over-iterating. However, S-LAND-FGP has worse SNR, CNR, and SNU values as well as a longer reconstruction time. S-CG-2-NA has superior spatial resolution to all algorithms while still maintaining good noise reduction and is uniquely stable against over-iterating. Conclusions. Both optimal algorithms (FISTA-TV and S-CG-2-NA) are stable against over-iterating and have excellent edge detection, with ESF MTF 50% values of 1.266 mm⁻¹ and 0.992 mm⁻¹. FISTA-TV had the greatest noise reduction, with SNR, CNR, and SNU values of 424, 434, and 0.91 × 10⁻⁴, respectively. However, its low spatial resolution makes FISTA-TV viable only for large-field dosimetry. S-CG-2-NA has better spatial resolution than FISTA-TV, with PSF and LSF MTF 50% values of 1.581 mm⁻¹ and 0.738 mm⁻¹, but less noise reduction. S-CG-2-NA still maintains good SNR, CNR, and SNU values of 168, 158, and 1.13 × 10⁻⁴, respectively. Thus, S-CG-2-NA is a well-rounded reconstruction algorithm that would be the preferable choice for small-field dosimetry.
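
Of the compared reconstructions, the plain Landweber iteration has the simplest form, x_{k+1} = x_k + lambda * A^T (b - A x_k); the toy dense-matrix sketch below (random system, assumed relaxation parameter, no superiorization or TV term) only makes that update concrete and is not the scanner's actual system model.

    import numpy as np

    def landweber(A, b, n_iter=200, relax=None):
        """Plain Landweber iteration for A x ~= b (no superiorization or TV term)."""
        if relax is None:
            # Convergence requires 0 < relax < 2 / sigma_max(A)^2.
            relax = 1.0 / np.linalg.norm(A, 2) ** 2
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            x = x + relax * A.T @ (b - A @ x)
        return x

    # Toy over-determined system standing in for the fan-beam optical CT model.
    rng = np.random.default_rng(4)
    A = rng.normal(size=(300, 100))
    x_true = rng.normal(size=100)
    b = A @ x_true + rng.normal(scale=0.01, size=300)
    x_rec = landweber(A, b)
    print("relative error:", round(float(np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true)), 3))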


Subjects
Computer-Assisted Image Processing, X-Ray Computed Tomography, Computer-Assisted Image Processing/methods, X-Ray Computed Tomography/methods, Radiometry/methods, Signal-to-Noise Ratio, Algorithms
15.
Med Image Anal ; 94: 103153, 2024 May.
Article in English | MEDLINE | ID: mdl-38569380

ABSTRACT

Monitoring the healing progress of diabetic foot ulcers is a challenging process. Accurate segmentation of foot ulcers can help podiatrists to quantitatively measure the size of wound regions to assist prediction of healing status. The main challenge in this field is the lack of publicly available manual delineations, which can be time-consuming and laborious to produce. Recently, methods based on deep learning have shown excellent results in automatic segmentation of medical images; however, they require large-scale datasets for training, and there is limited consensus on which methods perform best. The 2022 Diabetic Foot Ulcers segmentation challenge was held in conjunction with the 2022 International Conference on Medical Image Computing and Computer Assisted Intervention, and sought to address these issues and stimulate progress in this research domain. A training set of 2000 images exhibiting diabetic foot ulcers was released with corresponding segmentation ground truth masks. Of the 72 (approved) requests from 47 countries, 26 teams used this data to develop fully automated systems to predict the true segmentation masks on a test set of 2000 images, with the corresponding ground truth segmentation masks kept private. Predictions from participating teams were scored and ranked according to the average Dice similarity coefficient between the ground truth masks and the prediction masks. The winning team achieved a Dice of 0.7287 for diabetic foot ulcer segmentation. This challenge has now entered a live leaderboard stage where it serves as a challenging benchmark for diabetic foot ulcer segmentation.


Subjects
Diabetes Mellitus, Diabetic Foot, Humans, Diabetic Foot/diagnostic imaging, Computer Neural Networks, Benchmarking, Computer-Assisted Image Processing/methods
16.
Med Image Anal ; 94: 103158, 2024 May.
Article in English | MEDLINE | ID: mdl-38569379

ABSTRACT

Magnetic resonance (MR) images collected in 2D clinical protocols typically have large inter-slice spacing, resulting in high in-plane resolution and reduced through-plane resolution. Super-resolution techniques can enhance the through-plane resolution of MR images to facilitate downstream visualization and computer-aided diagnosis. However, most existing works train the super-resolution network at a fixed scaling factor, which is not well suited to clinical scenarios in which inter-slice spacing varies between MR scans. Inspired by recent progress in implicit neural representation, we propose a Spatial Attention-based Implicit Neural Representation (SA-INR) network for arbitrary reduction of MR inter-slice spacing. The SA-INR represents an MR image as a continuous implicit function of 3D coordinates. In this way, the SA-INR can reconstruct the MR image with arbitrary inter-slice spacing by continuously sampling the coordinates in 3D space. In particular, a local-aware spatial attention operation is introduced to model nearby voxels and their affinity more accurately over a larger receptive field. Meanwhile, to improve computational efficiency, a gradient-guided gating mask is proposed so that the local-aware spatial attention is applied to selected areas only. We evaluate our method on the public HCP-1200 dataset and a clinical knee MR dataset to demonstrate its superiority over other existing methods.
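
The core INR idea of treating the volume as a continuous function of coordinates, queryable at any through-plane spacing, can be sketched with a tiny coordinate MLP as below; the architecture is a generic toy assumption, not the SA-INR network, and it omits the spatial attention and gating modules.

    import torch
    import torch.nn as nn

    # Toy implicit representation: map (x, y, z) in [-1, 1]^3 to an intensity.
    mlp = nn.Sequential(
        nn.Linear(3, 128), nn.ReLU(),
        nn.Linear(128, 128), nn.ReLU(),
        nn.Linear(128, 1),
    )

    # Once fitted to one MR volume, the same network can be queried at any
    # through-plane spacing simply by sampling z more densely.
    z = torch.linspace(-1, 1, steps=96)            # arbitrary inter-slice spacing
    y, x = torch.meshgrid(torch.linspace(-1, 1, 8),
                          torch.linspace(-1, 1, 8), indexing="ij")
    coords = torch.stack([x.flatten().repeat(len(z)),
                          y.flatten().repeat(len(z)),
                          z.repeat_interleave(x.numel())], dim=-1)
    with torch.no_grad():
        intensities = mlp(coords).reshape(len(z), 8, 8)
    print(intensities.shape)   # torch.Size([96, 8, 8])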


Subjects
Computer-Assisted Diagnosis, Magnetic Resonance Imaging, Humans, Magnetic Resonance Imaging/methods, Computer Neural Networks, Knee Joint, Imaging Phantoms, Computer-Assisted Image Processing/methods
17.
Med Image Anal ; 94: 103149, 2024 May.
Article in English | MEDLINE | ID: mdl-38574542

ABSTRACT

The variation in histologic staining between different medical centers is one of the most profound challenges in the field of computer-aided diagnosis. The appearance disparity of pathological whole slide images causes algorithms to become less reliable, which in turn impedes the widespread applicability of downstream tasks like cancer diagnosis. Furthermore, different stainings introduce biases into training that, in the case of domain shifts, negatively affect test performance. Therefore, in this paper we propose MultiStain-CycleGAN, a multi-domain approach to stain normalization based on CycleGAN. Our modifications to CycleGAN allow us to normalize images of different origins without retraining or using different models. We perform an extensive evaluation of our method using various metrics and compare it to commonly used methods that are multi-domain capable. First, we evaluate how well our method fools a domain classifier that tries to assign a medical center to an image. Then, we test our normalization on the tumor classification performance of a downstream classifier. Furthermore, we evaluate the image quality of the normalized images using the structural similarity index and the ability to reduce the domain shift using the Fréchet inception distance. We show that our method is multi-domain capable, provides very high image quality among the compared methods, and can most reliably fool the domain classifier while keeping the tumor classifier performance high. By reducing the domain influence, biases in the data can be removed on the one hand and the origin of the whole slide image can be disguised on the other, thus enhancing patient data privacy.


Subjects
Coloring Agents, Neoplasms, Humans, Coloring Agents/chemistry, Staining and Labeling, Algorithms, Computer-Assisted Diagnosis, Computer-Assisted Image Processing/methods
18.
Med Image Anal ; 94: 103157, 2024 May.
Article in English | MEDLINE | ID: mdl-38574544

ABSTRACT

Computer-aided detection and diagnosis systems (CADe/CADx) in endoscopy are commonly trained using high-quality imagery, which is not representative of the heterogeneous input typically encountered in clinical practice. In endoscopy, image quality heavily relies on both the skills and experience of the endoscopist and the specifications of the system used for screening. Factors such as poor illumination, motion blur, and specific post-processing settings can significantly alter the quality and general appearance of these images. The impact of this so-called domain gap, between the data used for developing a system and the data it encounters after deployment, on the performance of the deep neural networks (DNNs) supporting endoscopic CAD systems remains largely unexplored. As many such systems, e.g. for polyp detection, are already being rolled out in clinical practice, this poses severe patient risks, particularly in community hospitals, where both the imaging equipment and the available experience are subject to considerable variation. Therefore, this study aims to evaluate the impact of this domain gap on the clinical performance of CADe/CADx for various endoscopic applications. For this, we leverage two publicly available data sets (KVASIR-SEG and GIANA) and two in-house data sets. We investigate the performance of commonly used DNN architectures under synthetic, clinically calibrated image degradations and on a prospectively collected dataset including 342 endoscopic images of lower subjective quality. Additionally, we assess the influence of DNN architecture and complexity, data augmentation, and pretraining techniques on robustness. The results reveal a considerable decline in performance of 11.6% (±1.5), as compared to the reference, within the clinically calibrated boundaries of image degradations. Nevertheless, employing more advanced DNN architectures and self-supervised in-domain pre-training effectively mitigates this drop to 7.7% (±2.03). Additionally, these enhancements yield the highest performance on the manually collected test set including images with lower subjective quality. By comprehensively assessing the robustness of popular DNN architectures and training strategies across multiple datasets, this study provides valuable insights into their performance and limitations for endoscopic applications. The findings highlight the importance of including robustness evaluation when developing DNNs for endoscopy applications and propose strategies to mitigate performance loss.


Subjects
Computer-Assisted Diagnosis, Computer Neural Networks, Humans, Computer-Assisted Diagnosis/methods, Gastrointestinal Endoscopy, Computer-Assisted Image Processing/methods
19.
PLoS One ; 19(4): e0301978, 2024.
Article in English | MEDLINE | ID: mdl-38603674

ABSTRACT

Radiomic features are usually used to predict target variables such as the absence or presence of a disease, treatment response, or time to symptom progression. One of the potential clinical applications is in patients with Parkinson's disease. Robust radiomic features for this specific imaging method, which are necessary for proper feature selection, have not yet been identified. Thus, we assessed the robustness of radiomic features in dopamine transporter imaging (DaT). For this study, we made an anthropomorphic head phantom with tissue heterogeneity using a personal 3D printer (polylactide, 82% infill); the bone was subsequently reproduced with plaster. A surgical cotton ball with radiotracer (123I-ioflupane) was inserted. Scans were performed on a two-detector hybrid camera with acquisition parameters corresponding to international guidelines for DaT single-photon emission computed tomography (SPECT). SPECT reconstruction was performed on a clinical workstation with iterative algorithms. Open-source LifeX software was used to extract 134 radiomic features. Statistical analysis was performed in RStudio using the intraclass correlation coefficient (ICC) and coefficient of variation (COV). Overall, radiomic features across different reconstruction parameters showed a moderate reproducibility rate (ICC = 0.636, p < 0.01). Assessment of ICC and COV within CT attenuation correction (CTAC) and non-attenuation correction (NAC) groups and within particular feature classes showed an excellent reproducibility rate (ICC > 0.9, p < 0.01), except for the intensity-based NAC group, where radiomic features showed a good repeatability rate (ICC = 0.893, p < 0.01). According to our results, CTAC is the main threat to feature stability. However, many radiomic features were sensitive to the selected reconstruction algorithm irrespective of attenuation correction. Radiomic features extracted from DaT-SPECT showed moderate to excellent reproducibility rates. These results make them suitable for clinical practice and human studies, but care should be taken with feature selection, as some radiomic features are more robust than others.
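
The ICC used to grade robustness can be computed from a measurements-by-reconstructions table; a two-way random-effects, absolute-agreement, single-measurement ICC(2,1) is a common choice, though the study's exact model is not stated here, so the sketch below and its toy data are assumptions.

    import numpy as np

    def icc_2_1(M):
        """ICC(2,1): rows = targets (e.g., repeated ROIs for one radiomic feature),
        columns = raters (reconstruction settings)."""
        n, k = M.shape
        grand = M.mean()
        row_means = M.mean(axis=1)
        col_means = M.mean(axis=0)
        msr = k * ((row_means - grand) ** 2).sum() / (n - 1)          # between targets
        msc = n * ((col_means - grand) ** 2).sum() / (k - 1)          # between raters
        sse = ((M - row_means[:, None] - col_means[None, :] + grand) ** 2).sum()
        mse = sse / ((n - 1) * (k - 1))                               # residual
        return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

    # Hypothetical values of one radiomic feature for 5 ROIs under 4 reconstruction settings.
    M = np.array([[1.00, 1.02, 0.98, 1.01],
                  [2.10, 2.05, 2.12, 2.08],
                  [0.50, 0.49, 0.52, 0.51],
                  [3.30, 3.28, 3.35, 3.31],
                  [1.75, 1.80, 1.73, 1.77]])
    print(round(icc_2_1(M), 3))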


Subjects
Dopamine Plasma Membrane Transport Proteins, Nortropanes, 60570, Humans, Reproducibility of Results, Computer-Assisted Image Processing/methods, Single Photon Emission Computed Tomography, Computers
20.
PLoS One ; 19(4): e0301132, 2024.
Article in English | MEDLINE | ID: mdl-38626138

ABSTRACT

Magnetic resonance imaging (MRI) datasets from epidemiological studies often show a lower prevalence of motion artifacts than is encountered in clinical practice. These artifacts can be unevenly distributed between subject groups and studies, which introduces a bias that must be addressed when augmenting data for machine learning purposes. Since unreconstructed multi-channel k-space data is typically not available for population-based MRI datasets, motion simulations must be performed using signal magnitude data. There is thus a need to systematically evaluate how realistic such magnitude-based simulations are. We performed magnitude-based motion simulations on a dataset (MR-ART) from 148 subjects for which real motion-corrupted reference data was also available. The similarity of real and simulated motion was assessed using image quality metrics (IQMs) including the Coefficient of Joint Variation (CJV), Signal-to-Noise Ratio (SNR), and Contrast-to-Noise Ratio (CNR). An additional comparison was made by investigating the decrease in the Dice-Sørensen Coefficient (DSC) of automated segmentations with increasing motion severity. Segmentation of the cerebral cortex was performed with 6 freely available tools: FreeSurfer, BrainSuite, ANTs, SAMSEG, FastSurfer, and SynthSeg+. To better mimic real subject motion, the original motion simulation within an existing data augmentation framework (TorchIO) was modified to allow a non-random motion paradigm and phase-encoding direction. The mean difference in CJV/SNR/CNR between the real motion-corrupted images and our modified simulations (0.004±0.054/-0.7±1.8/-0.09±0.55) was lower than that of the original simulations (0.015±0.061/0.2±2.0/-0.29±0.62). Further, the mean difference in DSC relative to the real motion-corrupted images was lower for our modified simulations (0.03±0.06) than for the original simulations (-0.15±0.09). SynthSeg+ showed the highest robustness towards all forms of motion, real and simulated. In conclusion, reasonably realistic synthetic motion artifacts can be induced at large scale when only magnitude MR images are available, in order to obtain unbiased data sets for the training of machine learning based models.
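
The image quality metrics used for the comparison have simple definitions; the sketch below computes CJV, SNR, and CNR from gray-matter, white-matter, and background intensity samples. Several formulations of these IQMs exist, so the specific definitions and the toy data here are assumptions rather than the study's exact implementation.

    import numpy as np

    def cjv(gm, wm):
        """Coefficient of Joint Variation between gray- and white-matter intensities."""
        return (gm.std() + wm.std()) / abs(gm.mean() - wm.mean())

    def snr(fg, bg):
        """Signal-to-noise ratio: mean foreground intensity over background std."""
        return fg.mean() / bg.std()

    def cnr(gm, wm, bg):
        """Contrast-to-noise ratio between tissue classes, noise taken from background."""
        return abs(gm.mean() - wm.mean()) / bg.std()

    # Toy intensity samples standing in for voxels inside GM, WM and air masks.
    rng = np.random.default_rng(5)
    gm = rng.normal(450, 40, 5000)
    wm = rng.normal(600, 35, 5000)
    bg = rng.normal(0, 12, 5000)
    print(f"CJV={cjv(gm, wm):.3f}  SNR={snr(wm, bg):.1f}  CNR={cnr(gm, wm, bg):.1f}")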


Subjects
Artifacts, Magnetic Resonance Imaging, Humans, Magnetic Resonance Imaging/methods, Motion (Physics), Brain/diagnostic imaging, Cerebral Cortex, Computer-Assisted Image Processing/methods